
Data-Driven Dimension Reduction Through Symmetry-Promoting Regularization
Reduced-order modeling of complex phenomena has become an increasingly necessary tool for efficiently developing digital twins, designing controllers, and rapidly performing studies. Active subspace approaches have been shown to be lightweight and powerful methods for discovering the dominant linear subspaces in science and engineering applications in the low-data limit. However, these approaches (1) require access to gradients of the quantities of interest, (2) require sufficient data to estimate those gradients, or (3) rely on linear regression to identify a one-dimensional subspace. In this work, we demonstrate that the presence of an active subspace is equivalent to the presence of a translationally invariant subspace. By modeling quantities of interest through a convex, symmetry-regularized optimization, we demonstrate that we can discover maximally invariant subspaces and their active subspace counterparts directly from data, without explicit access to gradients. In particular, we explore the effectiveness of our approach on a variety of applications in the low-data limit without access to gradients.
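To illustrate the equivalence the abstract builds on, the following sketch shows the classical gradient-based active subspace construction that this work avoids: for a quantity of interest f(x) = g(wᵀx), the gradient covariance C = E[∇f ∇fᵀ] has its dominant eigenvector along w, while the remaining eigenvectors span the translationally invariant subspace. The synthetic QoI, the variable names, and the sample sizes below are illustrative assumptions, not the authors' method (which is gradient-free).

```python
import numpy as np

# Synthetic quantity of interest with a one-dimensional active subspace:
# f(x) = g(w^T x) is invariant to translations orthogonal to w.
rng = np.random.default_rng(0)
d = 10
w = rng.normal(size=d)
w /= np.linalg.norm(w)

def f(x):
    return np.sin(x @ w)               # varies only along w

def grad_f(x):
    return np.cos(x @ w)[:, None] * w  # gradient is always parallel to w

# Classical (gradient-based) active subspace estimate:
# eigendecompose the Monte Carlo estimate of C = E[grad f grad f^T].
X = rng.normal(size=(500, d))
G = grad_f(X)
C = G.T @ G / len(X)
eigvals, eigvecs = np.linalg.eigh(C)   # ascending eigenvalues
w_hat = eigvecs[:, -1]                 # dominant eigenvector

# The recovered direction matches w up to sign; the near-zero eigenvalues
# correspond to the translationally invariant directions.
alignment = abs(w_hat @ w)
print(round(alignment, 6))             # close to 1.0
```

Because C is rank one here, the spectrum cleanly separates the single active direction from the nine invariant ones; the abstract's contribution is recovering this same decomposition from samples of f alone, with no access to grad_f.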